Kishida to visit France, Brazil and Paraguay starting next week

The Japan Times

Prime Minister Fumio Kishida will visit France, Brazil and Paraguay from Wednesday through May 6, the government said Friday. In Paris on Thursday, Kishida plans to give a keynote speech at a ministerial council meeting of the OECD and meet with French President Emmanuel Macron. The speech will reflect Kishida's intention to lead discussions to resolve socio-economic challenges for the international community, Chief Cabinet Secretary Yoshimasa Hayashi said at a news conference. Kishida is also set to deliver speeches at OECD events themed on generative artificial intelligence and on cooperation with Southeast Asia. In Brasilia on May 3, Kishida will meet with President Luiz Inacio Lula da Silva, this year's chair of the Group of 20 major economies, and hold a joint news conference.


Offshore Wind Plant Instance Segmentation Using Sentinel-1 Time Series, GIS, and Semantic Segmentation Models

arXiv.org Artificial Intelligence

Offshore wind farms represent a renewable energy source with a significant global growth trend, and their monitoring is strategic for territorial and environmental planning. This study's primary objective is to detect offshore wind plants at an instance level using semantic segmentation models and Sentinel-1 (S-1) time series. The secondary objectives are: (a) to develop a database consisting of labeled data and S-1 time series; (b) to compare the performance of five deep semantic segmentation architectures (U-Net, U-Net++, Feature Pyramid Network - FPN, DeepLabv3+, and LinkNet); (c) to develop a novel augmentation strategy that shuffles the positions of the images within the time series; (d) to investigate different dimensions of time series intervals (1, 5, 10, and 15 images); and (e) to evaluate the semantic-to-instance conversion procedure. LinkNet was the top-performing model, followed by U-Net++ and U-Net, while FPN and DeepLabv3+ presented the worst results. The evaluation of the semantic segmentation models shows that augmenting the time-series images improves Intersection over Union (IoU) by 25% and the F-score by 18%. The study showcases the augmentation strategy's capability to mitigate biases and precisely detect invariant targets. Furthermore, the conversion from semantic to instance segmentation demonstrates its efficacy in accurately isolating individual instances within classified regions, simplifying training data and reducing annotation effort and complexity.
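
The paper's code is not reproduced here, but two of the ideas in the abstract are simple enough to sketch: shuffling the temporal order of a Sentinel-1 stack as augmentation (valid because fixed turbines are invariant across the series, so the label mask is unchanged), and converting a binary semantic mask into instances via connected-component labeling. Array shapes and the toy data are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code) of time-series shuffle
# augmentation and semantic-to-instance conversion.
import numpy as np
from scipy import ndimage

def shuffle_time_series(stack: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Permute the temporal axis of a (T, H, W) Sentinel-1 stack.

    For time-invariant targets (fixed wind turbines), the label mask
    stays the same, so only the inputs are permuted.
    """
    order = rng.permutation(stack.shape[0])
    return stack[order]

def semantic_to_instances(mask: np.ndarray) -> np.ndarray:
    """Turn a binary semantic mask (H, W) into an instance map where
    each connected blob receives a unique integer id (0 = background)."""
    instances, _ = ndimage.label(mask.astype(bool))
    return instances

# Toy usage with random data standing in for a real S-1 time series.
rng = np.random.default_rng(0)
stack = rng.random((10, 64, 64))           # 10 acquisitions
augmented = shuffle_time_series(stack, rng)

mask = np.zeros((64, 64), dtype=np.uint8)
mask[5:10, 5:10] = 1                       # two separate "plants"
mask[30:40, 30:40] = 1
print(semantic_to_instances(mask).max())   # -> 2 instances
```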


Brazil's Upcoming Presidential Elections Are the Most Hate-Filled in Recent Memory

Mother Jones

Every other day, my WhatsApp bursts with messages from friends in Brazil and abroad expressing equal parts excitement and apprehension as Sunday's Brazilian presidential elections approach. On Wednesday, my best friend, who lives in the country's capital, Brasília, texted to say she was scared to wear red clothes when she goes to vote this weekend, because red is the color associated with the Workers' Party of former President Luiz Inácio Lula da Silva. Lula, the current front-runner, has a real, if slim, chance to beat far-right incumbent President Jair Bolsonaro in the first round by getting more than 50 percent of valid votes. "The mood is terrible," she wrote, later adding that in the last 48 hours, four instances of political violence had been recorded across the country. My friend's worries are justified.


Nvidia collaborates with the University of Florida to build 700-petaflop AI supercomputer

#artificialintelligence

Nvidia and the University of Florida (UF) today announced plans to build the fastest AI supercomputer in academia. By enhancing the capabilities of UF's existing HiPerGator supercomputer with the DGX SuperPod architecture, Nvidia claims the system -- which it expects will be up and running by early 2021 -- will deliver 700 petaflops (700 quadrillion floating-point operations per second) of performance. Some researchers within the AI community believe that sufficiently capable computers, in conjunction with reinforcement learning and other techniques, can achieve paradigm-shifting AI advances. A paper recently published by researchers at the Massachusetts Institute of Technology, MIT-IBM Watson AI Lab, Underwood International College, and the University of Brasilia found that deep learning improvements have been "strongly reliant" on increases in compute. And in 2018, OpenAI researchers released an analysis showing that from 2012 to 2018, the amount of compute used in the largest AI training runs grew more than 300,000 times with a 3.4-month doubling time, far exceeding the pace of Moore's law.
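
A quick back-of-the-envelope check makes the quoted growth figures concrete: at a ~3.4-month doubling time, a 300,000x increase in training compute takes roughly five years, over which Moore's law alone would deliver only a single-digit multiple. The snippet below just does that arithmetic; all numbers come from the OpenAI analysis cited above.

```python
# Worked example: how long does a 300,000x compute increase take at a
# 3.4-month doubling time, and what does Moore's law (~24-month
# doubling) give over the same window?
import math

doubling_months = 3.4
growth = 300_000

months = math.log2(growth) * doubling_months
print(f"{growth:,}x at a {doubling_months}-month doubling time "
      f"takes ~{months / 12:.1f} years")              # ~5.2 years

moore = 2 ** (months / 24)
print(f"Moore's law over the same span: ~{moore:.0f}x")  # ~6x
```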


DL Is Not Computationally Expensive By Accident, But By Design

#artificialintelligence

Researchers from MIT recently collaborated with the University of Brasilia and Yonsei University to estimate the computational limits of deep learning (DL). They stated, "The computational needs of deep learning scale so rapidly that they will quickly become burdensome again." The researchers analysed 1,058 research papers from the arXiv pre-print repository and other benchmark references in order to understand how the performance of deep learning techniques depends on computational power across several important application areas. They stated, "To understand why DL is so computationally expensive, we analyse its statistical as well as computational scaling in theory. We show DL is not computationally expensive by accident, but by design." They added, "The same flexibility that makes it excellent at modelling diverse phenomena as well as outperforming expert models also makes it more computationally expensive in nature."
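
The study's core move is empirical: regress reported benchmark error against training compute on log-log axes and read the scaling off the slope. Below is a minimal sketch of that kind of fit; the (compute, error) pairs are made up for illustration, whereas the real paper fits 1,058 published results.

```python
# Hedged sketch of a log-log scaling fit, in the spirit of (not
# reproducing) the paper's analysis. Data points are synthetic.
import numpy as np

compute = np.array([1e16, 1e17, 1e18, 1e19, 1e20])  # training FLOPs (made up)
error   = np.array([0.20, 0.13, 0.085, 0.055, 0.036])  # benchmark error (made up)

# Fit log10(error) = slope * log10(compute) + intercept.
slope, intercept = np.polyfit(np.log10(compute), np.log10(error), 1)
print(f"error ~ compute^{slope:.2f}")  # ~ -0.19 for these toy numbers

# Implied cost of halving the error: compute must grow by 2^(1/|slope|).
print(f"halving error costs ~{2 ** (1 / abs(slope)):.0f}x more compute")
```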


MIT researchers warn that deep learning is reaching its computational limit

#artificialintelligence

The rising demand for deep learning is so massive and complex that we are reaching the computational limits of the technology. A recent study suggests that progress in deep learning is heavily dependent on increases in computational ability. Researchers from the Massachusetts Institute of Technology (MIT), MIT-IBM Watson AI Lab, Underwood International College, and the University of Brasilia found that deep learning is strongly reliant on increases in compute. The researchers believe that continued progress in deep learning will require dramatically more computationally efficient methods. In the research paper, the co-authors wrote, "We show deep learning is not computationally expensive by accident, but by design. The same flexibility that makes it excellent at modelling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive. Despite this, we find that the actual computational burden of deep learning models is scaling more rapidly than (known) lower bounds from theory, suggesting that substantial improvements might be possible."


Deep Learning Reaching Computational Limits, Warns New MIT Study

#artificialintelligence

Researchers at the Massachusetts Institute of Technology, MIT-IBM Watson AI Lab, Underwood International College, and the University of Brasilia have found that we are reaching computational limits for deep learning. The study states that deep learning's impressive progress has come with a "voracious appetite for computing power" and that continued development will require "dramatically" more computationally efficient methods. "We show deep learning is not computationally expensive by accident, but by design. The same flexibility that makes it excellent at modeling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive," the coauthors wrote.


MIT researchers warn that deep learning is approaching computational limits

#artificialintelligence

That's according to researchers at the Massachusetts Institute of Technology, Underwood International College, and the University of Brasilia, who found in a recent study that progress in deep learning has been "strongly reliant" on increases in compute. It's their assertion that continued progress will require "dramatically" more computationally efficient deep learning methods, either through changes to existing techniques or via new as-yet-undiscovered methods. "We show deep learning is not computationally expensive by accident, but by design. The same flexibility that makes it excellent at modeling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive," the coauthors wrote. "Despite this, we find that the actual computational burden of deep learning models is scaling more rapidly than (known) lower bounds from theory, suggesting that substantial improvements might be possible."


Robots Are Solving Banks' Very Expensive Research Problem

#artificialintelligence

As lawmakers in Brasilia debated a controversial pension overhaul for months, a robot more than 5,000 miles away in London kept a close eye on all 513 of them. The algorithm, designed by technology startup Arkera Inc., tracked their comments in Brazilian newspapers and government web pages each day to predict the likelihood the bill would pass. Weeks before the legislation cleared its biggest obstacle in July, the machine's data crunching allowed Arkera analysts to predict the result almost to the letter, giving hedge fund clients in New York and London the insight to buy the Brazilian real near eight-month lows in May. It's since rallied more than 8%. This is the kind of edge that a new generation of researchers is betting will upend the research marketplace.
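
The article gives no detail on Arkera's model, but one standard way to turn per-legislator signals into a bill-passage probability is Monte Carlo simulation: estimate a "yes" probability for each of the 513 deputies (e.g. from classified news comments) and simulate the vote against the 308-vote (three-fifths) supermajority a Brazilian constitutional amendment needs in the lower house. Everything below, including the per-legislator probabilities, is a hypothetical sketch, not Arkera's method.

```python
# Hypothetical sketch: aggregate per-legislator P(yes) into P(bill passes)
# via Monte Carlo against a 308-of-513 supermajority threshold.
import numpy as np

rng = np.random.default_rng(42)
# Made-up per-legislator probabilities; a real system would derive these
# from classified text (newspaper quotes, official statements, etc.).
p_yes = np.clip(rng.normal(loc=0.62, scale=0.2, size=513), 0.0, 1.0)

def pass_probability(p_yes: np.ndarray, threshold: int = 308,
                     n_sims: int = 10_000) -> float:
    """Estimate P(bill passes) by simulating independent yes/no votes."""
    draws = rng.random((n_sims, p_yes.size)) < p_yes   # one row per simulation
    return float((draws.sum(axis=1) >= threshold).mean())

print(f"simulated P(pass) = {pass_probability(p_yes):.2f}")
```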


Social Participation Ontology: community documentation, enhancements and use examples

arXiv.org Artificial Intelligence

Participatory democracy is advancing in virtually all governments, and especially in South America, which exhibits a mixed culture and social predisposition. This article presents the "Social Participation Ontology" (OPS, from the Brazilian name Ontologia de Participação Social), implemented in compliance with the Web Ontology Language (OWL) standard for fostering social participation, especially on virtual platforms. The entities and links of OPS were defined through an extensive collaboration of specialists. It is shown that OPS is instrumental for information retrieval from the contents of the portal, both in terms of the actors (at various levels) as well as the mechanisms and activities. Significantly, OPS is linked to other OWL ontologies as an upper ontology, and via FOAF and BFO as higher upper ontologies, which yields sound organization and access of knowledge and data. In order to illustrate the usefulness of OPS, we present results on ontological expansion and integration with other ontologies and data. Ongoing work involves further adoption of OPS by the official Brazilian federal portal for social participation and NGOs, and further linkage to other ontologies for social participation.
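
The kind of retrieval the abstract describes can be sketched with rdflib: load the OWL file and query it with SPARQL. The file name, the OPS namespace URI, and the ops:participatesIn property are assumptions made for illustration; only FOAF is a real, stable vocabulary here.

```python
# Hedged sketch: querying an OWL ontology such as OPS with rdflib.
# The namespace URI and property names are placeholders, not the
# ontology's actual terms.
from rdflib import Graph, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")
OPS = Namespace("http://example.org/ops#")   # placeholder namespace

g = Graph()
g.parse("ops.owl")                           # hypothetical local copy of OPS
g.bind("foaf", FOAF)
g.bind("ops", OPS)

# Find every agent that takes part in some participation mechanism.
query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX ops:  <http://example.org/ops#>
SELECT ?agent ?mechanism WHERE {
    ?agent a foaf:Agent ;
           ops:participatesIn ?mechanism .
}
"""
for agent, mechanism in g.query(query):
    print(agent, "->", mechanism)
```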